Real-world data tends to be heavily imbalanced and severely skews data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while the off-the-shelf pretrained weights of ViTs always lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch with only LT data. Observing that ViTs suffer more severely from LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised pretraining. In addition, Binary Cross Entropy (BCE) loss, which performs conspicuously well with ViTs, encounters predicaments in LTR. We further propose a balanced BCE to ameliorate it with strong theoretical grounding. Specifically, we derive an unbiased extension of the Sigmoid and compensate with extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs within just a few epochs. Extensive experiments demonstrate that with MGP and Bal-BCE, LiVT successfully trains ViTs well without any additional data and significantly outperforms comparable state-of-the-art methods, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at https://github.com/XuZhengzhuo/LiVT.
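The Bal-BCE idea described in this abstract, compensating extra logit margins derived from an unbiased Sigmoid extension, can be sketched minimally as follows. The log-prior margin, the function name, and the toy data are assumptions for illustration only, not the paper's exact formulation:

```python
import numpy as np

def balanced_bce(logits, labels, class_counts):
    """Hedged sketch of a balanced binary cross-entropy.

    Shifts each class's logit by a log-prior margin before the sigmoid, so
    that rare classes are not systematically under-predicted. Using
    log(prior) as the margin is an assumption for illustration.
    """
    prior = class_counts / class_counts.sum()   # empirical class priors
    z = logits + np.log(prior)                  # compensate extra logit margins
    p = 1.0 / (1.0 + np.exp(-z))                # adjusted sigmoid (sketch)
    eps = 1e-12
    return -np.mean(labels * np.log(p + eps) + (1.0 - labels) * np.log(1.0 - p + eps))

# toy batch: 3 classes with long-tailed counts, one head-class positive
counts = np.array([900.0, 90.0, 10.0])
logits = np.array([[2.0, -1.0, -3.0]])
labels = np.array([[1.0, 0.0, 0.0]])
loss = balanced_bce(logits, labels, counts)
```

With identical raw logits, a positive sample of a tail class incurs a larger loss than one of a head class, which is the corrective pressure the margin is meant to supply.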
Image retrieval has become an increasingly attractive technique with broad multimedia application prospects, where deep hashing serves as the main branch toward low-storage and efficient retrieval. In this paper, we carry out an in-depth investigation of deep metric learning to build a powerful metric space in multi-label scenarios, where pair-wise losses suffer from high computational overhead and convergence difficulty, while proxy-based losses are theoretically incapable of expressing the profound label dependencies and exhibit conflicts in the constructed hypersphere space. To address these problems, we propose a novel metric learning framework with a Hybrid Proxy-Pair Loss (HyP$^2$ Loss) that constructs an expressive metric space with efficient training complexity w.r.t. the entire dataset. The proposed HyP$^2$ Loss focuses on optimizing the hypersphere space with learnable proxies and excavating data-to-data correlations of irrelevant pairs, which integrates the sufficient data correspondence of pair-based methods and the high efficiency of proxy-based methods. Extensive experiments on four standard multi-label benchmarks justify that the proposed method outperforms the state-of-the-art, is robust across different hash bits, and achieves significant performance gains with a faster, more stable convergence speed. Our code is available at https://github.com/JerryXu0129/HyP2-Loss.
Evaluation of 3D face reconstruction results typically relies on a rigid shape alignment between the estimated 3D model and the ground-truth scan. We observe that aligning two shapes with different reference points can largely affect the evaluation results. This poses difficulties for precisely diagnosing and improving 3D face reconstruction methods. In this paper, we propose a novel evaluation approach with a new benchmark, consisting of 100 globally aligned face scans with accurate facial keypoints, high-quality region masks, and topology-consistent meshes. Our approach performs region-wise shape alignment and leads to more accurate, bidirectional correspondences when computing the shape error. The fine-grained, region-wise evaluation results provide us with a detailed understanding of how state-of-the-art 3D face reconstruction methods perform. For example, our experiments on single-image-based reconstruction methods reveal that DECA performs best on the nose region, while GANFit performs better on the cheek region. In addition, a new and high-quality 3DMM basis, HIFI3D++, is further derived using the same procedure we construct to align and retopologize several 3D face datasets. We will release the benchmark, HIFI3D++, and our new evaluation pipeline at https://realy3dface.com.
Deep hashing has shown promising performance in large-scale image retrieval. However, the latent codes extracted by a Deep Neural Network (DNN) inevitably lose semantic information during binarization, which damages retrieval efficiency and makes the task challenging. Although many existing methods perform regularization to alleviate quantization errors, we figure out an incompatible conflict between the metric loss and the quantization loss. The metric loss penalizes inter-class distances to push different classes unconstrainedly far apart. Worse still, it tends to map the latent codes away from the ideal binarization points and generates severe ambiguity during binarization. Based on the minimum distance of binary linear codes, a Hashing-guided Hinge Function (HHF) is proposed to avoid this conflict. In detail, we carefully design a specific inflection point, which relies on the hash bit length and category number, to balance metric learning and quantization learning. Such a modification prevents the network from falling into local metric optima in deep hashing. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and MS-COCO show that HHF consistently outperforms existing techniques and is robust and flexible enough to be transplanted into other methods.
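The abstract above says the hinge's inflection point depends on the hash bit length and the number of categories, but not how. A toy sketch follows; the Plotkin-style minimum-distance threshold used here is an assumption for illustration, not the paper's actual formula:

```python
def hhf_hinge(hamming_dist, same_class, n_bits, n_classes):
    """Toy hinge with an inflection point tied to hash length and class count.

    The threshold below is a Plotkin-style minimum-distance heuristic for
    binary codes, chosen only to illustrate the idea of capping the
    inter-class push; it is not claimed to match the paper's design.
    """
    d_inflect = min(n_bits, 0.5 * n_bits * n_classes / (n_classes - 1.0))
    if same_class:
        return max(0.0, float(hamming_dist))        # pull same-class codes together
    # push different classes apart only until the inflection point, so the
    # metric term stops fighting the quantization term beyond that distance
    return max(0.0, d_inflect - float(hamming_dist))

# 16-bit codes, 10 classes: pairs already far apart incur no further penalty
far = hhf_hinge(12, same_class=False, n_bits=16, n_classes=10)
near = hhf_hinge(4, same_class=False, n_bits=16, n_classes=10)
```

The design point is the zero region of the hinge: once two codes of different classes are separated by at least the inflection distance, the metric term contributes no gradient, leaving the quantization term free to snap codes toward binary values.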
Exemplar-based colorization methods rely on a reference image to provide plausible colors for a target grayscale image. The key and the difficulty of exemplar-based colorization is to establish an accurate correspondence between these two images. Previous methods have attempted to construct such a correspondence but face two obstacles. First, using the luminance channel to compute the correspondence is inaccurate. Second, the dense correspondence they construct introduces wrong matching results and raises the computational burden. To address these two problems, we propose a Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and detailed semantic-related colors to the grayscale image in a coarse-to-fine manner. Our network can fully balance global and local colors while alleviating the ambiguous matching problem. Experiments show that our method outperforms existing methods in both quantitative and qualitative evaluation and achieves state-of-the-art performance.
Real-world data universally faces a severe class-imbalance problem and exhibits a long-tailed distribution, i.e., most labels are associated with limited instances. Naïve models supervised by such datasets prefer the dominant labels, encounter serious generalization challenges, and become poorly calibrated. We propose two novel methods from the prior perspective to alleviate this dilemma. First, we deduce a balance-oriented data augmentation named Uniform Mixup (UniMix) to promote mixup in long-tailed scenarios, which adopts advanced mixing factors and samplers in favor of the minority classes. Second, motivated by Bayesian theory, we figure out the Bayes Bias (Bayias), an inherent bias caused by the inconsistency of the prior, and compensate for it as a modification of the standard cross-entropy loss. We further prove that both of the proposed methods ensure classification calibration, theoretically and empirically. Extensive experiments verify that our strategies contribute to a better-calibrated model and that their combination achieves state-of-the-art performance on CIFAR-LT, ImageNet-LT, and iNaturalist 2018.
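Compensating a prior-induced bias inside the cross-entropy loss, as this abstract describes for Bayias, can be sketched in the standard logit-adjustment form. Whether Bayias uses exactly this offset is an assumption here, not a claim about the paper:

```python
import numpy as np

def prior_compensated_ce(logits, label, class_counts):
    """Sketch of a prior-compensated cross-entropy in the spirit of Bayias.

    Adds log class priors to the logits before the softmax; this is the
    generic logit-adjustment recipe, used only to illustrate the idea of
    cancelling the bias a skewed label prior injects into the loss.
    """
    prior = class_counts / class_counts.sum()
    z = logits + np.log(prior)                 # compensate the prior-induced bias
    z = z - z.max()                            # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

counts = np.array([1000.0, 100.0, 10.0])       # long-tailed class counts
head_loss = prior_compensated_ce(np.zeros(3), 0, counts)
tail_loss = prior_compensated_ce(np.zeros(3), 2, counts)
```

With identical logits, the tail class receives a larger loss than the head class, so during training the model must raise its tail-class logits to break even, counteracting the imbalance.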
Spatiotemporal predictive learning (ST-PL) is a hotspot with numerous applications, such as object movement and meteorological prediction. It aims at predicting the subsequent frames from an observed sequence. However, the inherent uncertainty among consecutive frames exacerbates the difficulty of long-term prediction. To tackle the increasing ambiguity during forecasting, we design CMS-LSTM to focus on context correlations and multi-scale spatiotemporal flow with fine-grained local details, containing two elaborately designed blocks: the Context Embedding (CE) block and the Spatiotemporal Expression (SE) block. CE is designed for abundant context interactions, while SE focuses on multi-scale spatiotemporal expression in hidden states. The newly introduced blocks also enable other spatiotemporal models (e.g., PredRNN, SA-ConvLSTM) to produce representative implicit features for ST-PL and improve prediction quality. Qualitative and quantitative experiments demonstrate the effectiveness and flexibility of our proposed method. With fewer parameters, CMS-LSTM outperforms state-of-the-art methods on the metrics of two representative benchmarks and scenarios.
Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is a very abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is simple to obtain, general enough to describe various scene contents, and yet informative enough to disentangle objects and background. Moreover, it serves as an intuitive user control for scene editing. Based on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with global-local discrimination. Our model obtains the generation fidelity and editing flexibility of individual objects while being able to efficiently compose objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Project page: https://snap-research.github.io/discoscene/
Modern autonomous driving systems are characterized as modular tasks in sequential order, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads. These may suffer from accumulated errors or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of them contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage the advantages of each module and provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of this philosophy is shown to surpass previous state-of-the-art methods by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially for simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by the desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with a directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress the cloudy artifacts and geometry noise that arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found on our project page: https://cassiepython.github.io/nerfart/.
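The directional constraint mentioned in this abstract can be sketched in its generic form: the shift between the source and stylized image embeddings should align with the shift between the source and target text embeddings. The plain vectors below are stand-ins; a real implementation would use CLIP embeddings, which this sketch assumes away:

```python
import numpy as np

def directional_loss(img_src, img_stylized, txt_src, txt_target):
    """Hedged sketch of a directional embedding constraint.

    Penalizes the misalignment (1 - cosine similarity) between the image
    embedding shift and the text embedding shift; whether NeRF-Art uses
    exactly this form is an assumption here.
    """
    d_img = img_stylized - img_src
    d_txt = txt_target - txt_src
    cos = np.dot(d_img, d_txt) / (np.linalg.norm(d_img) * np.linalg.norm(d_txt) + 1e-8)
    return 1.0 - cos

# shifts pointing the same way incur (near-)zero loss
aligned = directional_loss(np.zeros(2), np.array([1.0, 0.0]),
                           np.zeros(2), np.array([2.0, 0.0]))
```

Constraining the direction of the embedding shift, rather than the stylized embedding itself, is what lets such losses control the trajectory of the style change instead of merely its endpoint.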